
Deci and Intel look to optimise deep learning inference

#artificialintelligence

The deep learning company Deci has announced a broad strategic business and technology collaboration with Intel to optimise deep learning inference on Intel Architecture (IA) CPUs. As one of the first companies to participate in the Intel Ignite startup accelerator, Deci will now work with Intel to deploy innovative AI technologies to mutual customers. The collaboration is intended to take a significant step towards enabling deep learning inference at scale on Intel CPUs, reducing costs and latency and enabling new applications of deep learning inference. New deep learning tasks can be performed in real time on edge devices, and companies that run large-scale inference workloads can dramatically cut cloud or datacentre costs simply by switching the inference hardware from GPU to Intel CPU. "By optimising the AI models that run on Intel's hardware, Deci enables customers to get even more speed and will allow for cost-effective and more general deep learning use cases on Intel CPUs," said Deci CEO and co-founder Yonatan Geifman.


Intel hooks up with Deci for deep learning

#artificialintelligence

As one of the first companies to participate in the Intel Ignite startup accelerator, Deci will now work with Intel to deploy innovative AI technologies to mutual customers. The collaboration helps enable deep learning inference at scale on Intel CPUs, reducing costs and latency, and enabling new applications of deep learning inference. New deep learning tasks can be performed in real time on edge devices, and companies that run large-scale inference workloads can dramatically cut cloud or datacenter costs simply by switching the inference hardware from GPU to Intel CPU. "By optimizing the AI models that run on Intel's hardware, Deci enables customers to get even more speed and will allow for cost-effective and more general deep learning use cases on Intel CPUs," says Deci CEO and co-founder Yonatan Geifman. Deci and Intel's collaboration began with MLPerf, where on several Intel CPUs Deci's AutoNAC (Automated Neural Architecture Construction) technology accelerated the inference speed of the well-known ResNet-50 neural network, reducing the submitted models' latency by a factor of up to 11.8x and increasing throughput by up to 11x.
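To make the reported factors concrete, a small helper can apply them to baseline measurements. This is an illustrative sketch only: the speedup factors come from the article, but the baseline latency and throughput numbers below are assumptions, not Deci's published figures.

```python
def apply_speedups(baseline_latency_ms, latency_factor,
                   baseline_throughput_ips, throughput_factor):
    """Apply multiplicative speedup factors to baseline measurements.

    latency_factor is how many times lower the latency becomes
    (e.g. 11.8 for "11.8x lower latency"); throughput_factor is how
    many times higher the throughput becomes (e.g. 11 for "11x").
    """
    return {
        "latency_ms": baseline_latency_ms / latency_factor,
        "throughput_ips": baseline_throughput_ips * throughput_factor,
    }

# Hypothetical baseline: a ResNet-50 at 100 ms per inference and
# 50 images/s on an unoptimised CPU deployment.
result = apply_speedups(100.0, 11.8, 50.0, 11.0)
# Latency falls to roughly 8.5 ms; throughput rises to 550 images/s.
```

Note that latency and throughput improve by different factors: throughput also depends on batching and parallelism, so the two metrics need not scale identically.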


Ansor: Generating High-Performance Tensor Programs for Deep Learning

Zheng, Lianmin, Jia, Chengfan, Sun, Minmin, Wu, Zhao, Yu, Cody Hao, Haj-Ali, Ameer, Wang, Yida, Yang, Jun, Zhuo, Danyang, Sen, Koushik, Gonzalez, Joseph E., Stoica, Ion

arXiv.org Machine Learning

High-performance tensor programs are crucial to guarantee efficient execution of deep neural networks. However, obtaining performant tensor programs for different operators on various hardware platforms is notoriously challenging. Currently, deep learning systems rely on vendor-provided kernel libraries or various search strategies to get performant tensor programs. These approaches either require significant engineering effort to develop platform-specific optimization code or fall short of finding high-performance programs due to restricted search space and ineffective exploration strategy. We present Ansor, a tensor program generation framework for deep learning applications. Compared with existing search strategies, Ansor explores many more optimization combinations by sampling programs from a hierarchical representation of the search space. Ansor then fine-tunes the sampled programs with evolutionary search and a learned cost model to identify the best programs. Ansor can find high-performance programs that are outside the search space of existing state-of-the-art approaches. In addition, Ansor utilizes a task scheduler to simultaneously optimize multiple subgraphs in deep neural networks. We show that Ansor improves the execution performance of deep neural networks relative to the state-of-the-art on the Intel CPU, ARM CPU, and NVIDIA GPU by up to 3.8×, 2.6×, and 1.7×, respectively.
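The abstract's core loop, sampling candidate programs from a search space and refining them with evolutionary search ranked by a learned cost model, can be sketched in miniature. This is not Ansor's actual API or search space; the knob names and the stand-in cost function below are purely illustrative assumptions.

```python
import random

# Illustrative search space: each position is one tuning knob
# (tile size, unroll factor, vectorize on/off).
SPACE = [(8, 16, 32, 64), (1, 2, 4, 8), (False, True)]

def sample_candidates(space, n):
    """Randomly sample n candidate programs from the hierarchical space
    (flattened here to tuples of knob values for simplicity)."""
    return [tuple(random.choice(opts) for opts in space) for _ in range(n)]

def cost_model(candidate):
    """Stand-in for a learned cost model: lower predicted cost is better.
    A real model would be trained on measured hardware run times."""
    tile, unroll, vectorize = candidate
    return abs(tile - 32) + abs(unroll - 4) + (0 if vectorize else 10)

def mutate(candidate, space):
    """Mutate a single knob at random to explore neighbours."""
    i = random.randrange(len(candidate))
    new = list(candidate)
    new[i] = random.choice(space[i])
    return tuple(new)

def evolutionary_search(space, generations=20, population=32, keep=8):
    """Evolve a population toward low predicted cost."""
    pop = sample_candidates(space, population)
    for _ in range(generations):
        pop.sort(key=cost_model)              # rank by predicted cost
        parents = pop[:keep]                  # keep the best candidates
        children = [mutate(random.choice(parents), space)
                    for _ in range(population - keep)]
        pop = parents + children
    return min(pop, key=cost_model)

best = evolutionary_search(SPACE)
```

In the real system the best candidates are compiled and measured on hardware, and those measurements are fed back to retrain the cost model; this sketch omits that feedback step.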


Microsoft Knows Exactly Where Intel's Future Is

AITopics Original Links

This week, Microsoft researcher Doug Burger received more than his usual share of email. On Monday, Intel told the world it was spending $16.7 billion in cash to acquire a company called Altera. And perhaps more than anyone, Burger understands why this deal makes sense for the world's largest chip maker. At Microsoft, he cooked up a new way of powering the company's Bing search engine using the low-power programmable chips sold by Altera, pairing them with traditional microprocessors from Intel. Asked how he views the Intel acquisition, Burger is understandably coy.